exascale computer


UK to invest £900m in supercomputer in bid to build own 'BritGPT'

The Guardian

The UK government is to invest £900m in a cutting-edge supercomputer as part of an artificial intelligence strategy that includes ensuring the country can build its own "BritGPT". The Treasury outlined plans to spend around £900m on building an exascale computer, which would be several times more powerful than the UK's biggest existing computers, and on establishing a new AI research body. An exascale computer can be used for training complex AI models, but also has other uses across science, industry and defence, including modelling weather forecasts and climate projections. The Treasury said the £900m investment will "allow researchers to better understand climate change, power the discovery of new drugs and maximise our potential in AI". An exascale computer is one that can carry out more than one billion billion simple calculations a second, a rate known as an "exaflop".
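The "billion billion" figure can be made concrete with a line of arithmetic (the petaflop comparison below is an illustration for scale, not from the article):

```python
# One exaflop: a billion billion (10**9 * 10**9 = 10**18) calculations per second
exaflop = 10**9 * 10**9

# For scale, express that in petaflops (10**15 calculations per second),
# the unit used for today's largest machines:
petaflop = 10**15
print(exaflop // petaflop)  # prints 1000
```

So an exascale machine is a thousand times faster than a one-petaflop system.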


The US may have just pulled even with China in the race to build supercomputing's next big thing

MIT Technology Review

There was much celebrating in America last month when the US Department of Energy unveiled Summit, the world's fastest supercomputer. Now the race is on to achieve the next significant milestone in processing power: exascale computing. This involves building, within the next few years, a machine capable of a billion billion calculations per second, or one exaflop, which would make it roughly five times faster than Summit. Every person on Earth would have to do a calculation every second of every day for just over four years to match what an exascale machine will be able to do in a flash. This phenomenal power will enable researchers to run massively complex simulations that spark advances in many fields, from climate science to genomics, renewable energy, and artificial intelligence.
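The "every person on Earth for just over four years" claim above checks out with back-of-the-envelope arithmetic (the world-population figure is an assumption, roughly the 2018 value):

```python
SECONDS_PER_YEAR = 365 * 24 * 60 * 60   # 31,536,000
WORLD_POPULATION = 7.6e9                # approximate 2018 figure (assumption)
EXAFLOP = 1e18                          # one billion billion calculations/second

# Years needed for everyone on Earth, doing one calculation per second,
# to match a single second of exascale computing:
years = EXAFLOP / (WORLD_POPULATION * SECONDS_PER_YEAR)
print(f"{years:.2f} years")  # prints "4.17 years" — just over four years
```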


Intel Gets Serious About Neuromorphic, Cognitive Computing Future

#artificialintelligence

Like all hardware device makers eager to meet the newest market opportunity, Intel is placing multiple bets on the future of machine learning hardware. The chipmaker has already cast its Xeon Phi and future integrated Nervana Systems chips into the deep learning pool while touting regular Xeons to do the heavy lifting on the inference side. However, a recent conversation we had with Intel turned up a surprising new addition to the machine learning conversation--an emphasis on neuromorphic devices and what Intel is openly calling "cognitive computing" (a term used primarily--and heavily--for IBM's Watson-driven AI technologies). This is the first time to date we've heard the company make any definitive claims about where neuromorphic chips might fit into a strategy to capture machine learning, and it marks a bold grab for the term "cognitive computing", which has been an umbrella term for Big Blue's AI business. Intel has been developing neuromorphic devices for some time; one of its first widely known prototypes appeared in 2012.


Long Promised Artificial Intelligence Is Looming--and It's Going to Be Amazing

#artificialintelligence

We have been hearing predictions for decades of a takeover of the world by artificial intelligence. In 1957, Herbert A. Simon predicted that within 10 years a digital computer would be the world's chess champion. That didn't happen until 1997, when IBM's Deep Blue defeated Garry Kasparov. And despite Marvin Minsky's 1970 prediction that "in from three to eight years we will have a machine with the general intelligence of an average human being," we still consider that a feat of science fiction. The pioneers of artificial intelligence were surely off on the timing, but they weren't wrong; AI is coming.